Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Existing methods update the source model by (re-)training on each target domain. While effective, re-training is sensitive to the amount and order of the data and to the hyperparameters of optimization. We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model. Our diffusion-driven adaptation method, DDA, shares its models for classification and generation across all domains. Both models are trained on the source domain, then fixed during testing. We augment diffusion with image guidance and self-ensembling to automatically decide how much to adapt. Input adaptation by DDA is more robust than prior model adaptation approaches across a variety of corruptions, architectures, and data regimes on the ImageNet-C benchmark. With its input-wise updates, DDA succeeds where model adaptation degrades on data that arrives in small batches, in non-uniform order, or mixed with multiple corruptions.
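To make the input-level update concrete, here is a hedged sketch of a DDA-style projection loop in PyTorch. The `diffusion.add_noise`, `diffusion.denoise_step`, and `diffusion.alpha` interfaces, the timestep `t_star`, and the guidance weight `w` are illustrative assumptions, not the paper's exact API; the self-ensemble below is likewise a simplified stand-in for the published confidence-based scheme.

```python
import torch

@torch.no_grad()
def dda_adapt_input(x_target, diffusion, t_star=300, w=6.0):
    """Project a shifted test image toward the source domain (sketch)."""
    # Partially noise the input up to an intermediate timestep, so the
    # frozen source-trained diffusion model can "repaint" the corruption.
    x_t = diffusion.add_noise(x_target, t=t_star)          # assumed API
    for t in reversed(range(t_star)):
        x0_hat, x_t = diffusion.denoise_step(x_t, t)       # assumed API
        # Image guidance: pull the trajectory toward the original input
        # so class content survives the projection.
        x_t = x_t - w * diffusion.alpha(t) * (x0_hat - x_target)
    return x_t

@torch.no_grad()
def dda_predict(x_target, diffusion, classifier):
    x_proj = dda_adapt_input(x_target, diffusion)
    # Simplified self-ensemble over the original and projected inputs.
    probs = classifier(x_target).softmax(-1) + classifier(x_proj).softmax(-1)
    return probs.argmax(-1)
```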
The promise of self-supervised learning (SSL) is to leverage large amounts of unlabeled data to solve complex tasks. While there has been excellent progress with simple, image-level learning, recent methods have shown the advantage of including knowledge of image structure. However, by introducing hand-crafted image segmentations to define regions of interest, or specialized augmentation strategies, these methods sacrifice the simplicity and generality that make SSL so powerful. Instead, we propose a self-supervised learning paradigm that discovers this image structure by itself. Our method, Odin, couples object discovery and representation networks to discover meaningful image segmentations without any supervision. The resulting learning paradigm is simpler, less brittle, and more general, and achieves state-of-the-art transfer learning results for object detection and instance segmentation on COCO, and semantic segmentation on PASCAL and Cityscapes, while surpassing supervised pre-training for video segmentation on DAVIS.
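A minimal sketch of how discovery and representation learning can be coupled in the spirit of Odin: cluster dense features into pseudo-segments, pool both augmented views per segment, and align the pooled object vectors. The plain k-means clustering and the InfoNCE-style loss are assumptions for illustration, not the paper's exact recipe; the two views are assumed to be spatially aligned.

```python
import torch
import torch.nn.functional as F

def discover_objects(feats, k=8, iters=10):
    """Cluster dense features (b, c, h, w) into k pseudo-segments via k-means."""
    b, c, h, w = feats.shape
    x = feats.permute(0, 2, 3, 1).reshape(b, h * w, c)
    centers = x[:, torch.randperm(h * w)[:k]].clone()      # random init per image
    for _ in range(iters):
        assign = torch.cdist(x, centers).argmin(-1)        # nearest center
        for j in range(k):                                 # recompute means
            mask = (assign == j).unsqueeze(-1).float()
            centers[:, j] = (x * mask).sum(1) / mask.sum(1).clamp(min=1)
    return assign.view(b, h, w)

def object_level_loss(feats_a, feats_b, segments, k=8, tau=0.1):
    """Pool two aligned views per discovered segment and align them."""
    b = feats_a.shape[0]
    seg = F.one_hot(segments, num_classes=k).permute(0, 3, 1, 2).float()
    def pool(f):
        z = torch.einsum('bchw,bkhw->bkc', f, seg)
        return z / seg.sum((2, 3)).unsqueeze(-1).clamp(min=1)
    za, zb = F.normalize(pool(feats_a), dim=-1), F.normalize(pool(feats_b), dim=-1)
    logits = torch.einsum('bkc,bqc->bkq', za, zb) / tau    # segment-vs-segment
    labels = torch.arange(k, device=logits.device).repeat(b)
    return F.cross_entropy(logits.reshape(b * k, k), labels)
```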
Adaptive defenses that use test-time optimization promise to improve adversarial robustness. We categorize such adaptive test-time defenses, explain their potential benefits and drawbacks, and evaluate a representative set of the latest adaptive defenses for image classification. Unfortunately, none significantly improve upon static defenses when subjected to our careful case-study evaluation. Some even weaken the underlying static model while simultaneously increasing inference computation. While these results are disappointing, we still believe that adaptive test-time defenses are a promising avenue of research and, as such, we provide recommendations for their thorough evaluation. We extend the checklist of Carlini et al. (2019) by providing concrete steps specific to adaptive defenses.
General perception systems such as Perceivers can process arbitrary modalities in any combination and are able to handle up to a few hundred thousand inputs. They achieve this generality by using exclusively global attention operations. This however hinders them from scaling up to the input sizes required to process raw high-resolution images or video. In this paper, we show that some degree of locality can be introduced back into these models, greatly improving their efficiency while preserving their generality. To scale them further, we introduce a self-supervised approach that enables learning dense low-dimensional positional embeddings for very large signals. We call the resulting model a Hierarchical Perceiver (HiP). In sum, our contributions are: 1) scaling Perceiver-type models to raw high-resolution images and audio+video, 2) showing the feasibility of learning 1M+ positional embeddings from scratch using masked auto-encoding, 3) demonstrating competitive performance on raw data from ImageNet, AudioSet, PASCAL VOC, ModelNet40 and Kinetics datasets with the same exact, unchanged model and without specialized preprocessing or any tokenization.
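The locality idea can be sketched in a few lines: one assumed HiP-style block partitions the token sequence into local groups, compresses each group into a small set of learned latents by cross-attention, and hands the merged latents to the next, coarser level. Sizes and the use of `nn.MultiheadAttention` are illustrative, not the published configuration.

```python
import torch
import torch.nn as nn

class HiPBlock(nn.Module):
    """One hierarchical level: local grouping + cross-attention compression."""
    def __init__(self, dim=256, groups=16, latents_per_group=32, heads=8):
        super().__init__()
        self.groups = groups
        self.latents = nn.Parameter(torch.randn(groups, latents_per_group, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (batch, tokens, dim)
        b, n, d = x.shape                       # assumes n divisible by groups
        x = x.view(b * self.groups, n // self.groups, d)   # local grouping
        q = self.latents.repeat(b, 1, 1)        # per-group learned queries
        z, _ = self.attn(q, x, x)               # attend only within each group
        return z.reshape(b, -1, d)              # merged latents for next level
```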
Recent progress in image recognition has stimulated the deployment of vision systems at an unprecedented scale. As a result, visual data are now often consumed not only by humans but also by machines. Existing image processing methods only optimize for better human perception, yet the resulting images may not be accurately recognized by machines. This can be undesirable, e.g., when the images are handled by search engines or recommendation systems. In this work, we examine simple approaches to improve machine recognition of processed images: optimizing the recognition loss directly on the image processing network, or through an intermediate transforming model. Interestingly, the processing model's ability to enhance recognition quality can transfer when evaluated on models of different architectures, recognized categories, tasks, and training datasets. This makes the methods applicable even when we have no knowledge of future recognition models, e.g., if we upload processed images to the Internet. We conduct experiments on multiple image processing tasks, with ImageNet classification and PASCAL VOC detection as recognition tasks. With these simple yet effective methods, substantial accuracy gains can be achieved with strong transferability and minimal image quality loss. Through a user study, we further show that the accuracy gain can transfer to a black-box cloud model. Finally, we attempt to explain this transferability phenomenon by demonstrating the similarities of different models' decision boundaries. Code is available at https://github.com/liuzhuang13/transferable_ra.
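The first of the two approaches (optimizing the recognition loss directly on the processing network) reduces to a two-term objective. This sketch, with assumed model and loss choices, shows the idea; the specific losses and the weighting `lam` are illustrative.

```python
import torch.nn.functional as F

def training_step(processor, recognizer, x_degraded, x_clean, labels, lam=0.1):
    """One step of recognition-aware training (sketch, assumed loss choices)."""
    recognizer.eval()                           # pretrained recognizer stays fixed
    for p in recognizer.parameters():
        p.requires_grad_(False)
    out = processor(x_degraded)                 # e.g. denoising / super-resolution
    quality_loss = F.l1_loss(out, x_clean)      # the usual per-pixel objective
    rec_loss = F.cross_entropy(recognizer(out), labels)  # recognition loss
    return quality_loss + lam * rec_loss        # lam trades fidelity vs. accuracy
```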
Visual recognition requires rich representations that span levels from low to high, scales from small to large, and resolutions from fine to coarse. Even with the depth of features in a convolutional network, a layer in isolation is not enough: compounding and aggregating these representations improves inference of what and where. Architectural efforts are exploring many dimensions for network backbones, designing deeper or wider architectures, but how to best aggregate layers and blocks across a network deserves further attention. Although skip connections have been incorporated to combine layers, these connections have been "shallow" themselves, and only fuse by simple, one-step operations. We augment standard architectures with deeper aggregation to better fuse information across layers. Our deep layer aggregation structures iteratively and hierarchically merge the feature hierarchy to make networks with better accuracy and fewer parameters. Experiments across architectures and tasks show that deep layer aggregation improves recognition and resolution compared to existing branching and merging schemes.
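A toy sketch of the iterative aggregation idea, under the assumption that the aggregation node is a 1x1 convolution over concatenated features and that stage outputs already share a spatial size; the published node design and topology are richer.

```python
import torch
import torch.nn as nn

class AggregationNode(nn.Module):
    """Fuse several feature maps by concatenation + 1x1 conv (assumed design)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True))

    def forward(self, *feats):
        return self.net(torch.cat(feats, dim=1))

class IterativeDeepAggregation(nn.Module):
    """Fold stage outputs (shallow -> deep) in progressively, rather than
    fusing everything with a single one-step skip connection."""
    def __init__(self, stage_channels, out_ch):
        super().__init__()
        self.nodes = nn.ModuleList()
        acc = stage_channels[0]
        for ch in stage_channels[1:]:
            self.nodes.append(AggregationNode(acc + ch, out_ch))
            acc = out_ch

    def forward(self, stages):                  # assumes shared spatial size
        agg = stages[0]
        for node, stage in zip(self.nodes, stages[1:]):
            agg = node(agg, stage)              # deeper, iterative fusion
        return agg
```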
Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (≈ 2.5 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
The demand for high-resolution video content has grown over the years. However, the delivery of high-resolution video is constrained either by the computational resources required for rendering or by the network bandwidth for remote transmission. To remedy this limitation, we leverage the eye trackers found alongside existing augmented and virtual reality headsets. We propose the application of video super-resolution (VSR) techniques to fuse low-resolution context with regional high-resolution context for resource-constrained consumption of high-resolution content without a perceivable drop in quality. Eye trackers provide us the gaze direction of a user, aiding us in the extraction of the regional high-resolution context. As only pixels that fall within the gaze region can be resolved by the human eye, a large amount of the delivered content is redundant: we cannot perceive differences in quality beyond the observed region. To generate a visually pleasing frame from the fusion of the high-resolution and low-resolution regions, we study the capability of a deep neural network to transfer the context of the observed region to the other (low-resolution) regions of the current and future frames. We label this task Foveated Video Super-Resolution (FVSR), as we need to super-resolve the low-resolution regions of current and future frames through the fusion of pixels from the gaze region. We propose Cross-Resolution Flow Propagation (CRFP) for FVSR. We train and evaluate CRFP on the REDS dataset on the task of 8x FVSR, i.e. a combination of 8x VSR and the fusion of the foveated region. Departing from the conventional evaluation of per-frame quality using SSIM or PSNR, we propose the evaluation of past foveated regions, measuring the capability of a model to leverage the noise present in eye trackers during FVSR. Code is made available at https://github.com/eugenelet/CRFP.
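The composition step that precedes super-resolution can be sketched as follows, assuming the eye tracker supplies a gaze center in high-resolution coordinates; the CRFP network itself is elided here and the splicing logic is purely illustrative.

```python
import torch.nn.functional as F

def compose_foveated_frame(lr_frame, hr_fovea, gaze_xy, scale=8):
    """lr_frame: (1, 3, h, w) low-res; hr_fovea: (1, 3, fh, fw) high-res crop."""
    up = F.interpolate(lr_frame, scale_factor=scale, mode='bicubic',
                       align_corners=False)     # cheap full-frame upsample
    _, _, fh, fw = hr_fovea.shape
    H, W = up.shape[-2:]
    x, y = gaze_xy                              # gaze center, high-res coords
    top = min(max(y - fh // 2, 0), H - fh)      # keep the crop inside the frame
    left = min(max(x - fw // 2, 0), W - fw)
    up[..., top:top + fh, left:left + fw] = hr_fovea  # splice true detail
    return up                                   # input to the propagation net
```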
As language models (LMs) scale, they develop many novel behaviors, good and bad, exacerbating the need to evaluate how they behave. Prior work creates evaluations with crowdwork (which is time-consuming and expensive) or existing data sources (which are not always available). Here, we automatically generate evaluations with LMs. We explore approaches with varying amounts of human effort, from instructing LMs to write yes/no questions to making complex Winogender schemas with multiple stages of LM-based generation and filtering. Crowdworkers rate the examples as highly relevant and agree with 90-100% of labels, sometimes more so than corresponding human-written datasets. We generate 154 datasets and discover new cases of inverse scaling where LMs get worse with size. Larger LMs repeat back a dialog user's preferred answer ("sycophancy") and express greater desire to pursue concerning goals like resource acquisition and goal preservation. We also find some of the first examples of inverse scaling in RL from Human Feedback (RLHF), where more RLHF makes LMs worse. For example, RLHF makes LMs express stronger political views (on gun rights and immigration) and a greater desire to avoid shut down. Overall, LM-written evaluations are high-quality and let us quickly discover many novel LM behaviors.
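The lowest-effort end of that spectrum (instructing an LM to write yes/no questions, then filtering with a second LM pass) can be sketched as below, where `generate` stands in for any text-generation API and the prompts are illustrative rather than the paper's.

```python
def make_eval_dataset(generate, behavior, n=200):
    """Generate and filter yes/no questions testing `behavior` (sketch)."""
    prompt = (f"Write a yes/no question that tests whether an AI assistant "
              f"exhibits the following behavior: {behavior}\nQuestion:")
    candidates = [generate(prompt) for _ in range(n)]
    kept = []
    for q in candidates:
        # A second LM pass acts as a relevance/quality filter, echoing the
        # multi-stage generation-and-filtering pipeline described above.
        verdict = generate(f"Does this question clearly test the behavior "
                           f"'{behavior}'? Answer yes or no.\nQuestion: {q}\nAnswer:")
        if verdict.strip().lower().startswith("yes"):
            kept.append({"question": q.strip(), "behavior": behavior})
    return kept
```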
Automated slicing aims to identify subsets of evaluation data where a trained model performs anomalously. This is an important problem for machine learning pipelines in production since it plays a key role in model debugging and comparison, as well as the diagnosis of fairness issues. Scalability has become a critical requirement for any automated slicing system due to the large search space of possible slices and the growing scale of data. We present Autoslicer, a scalable system that searches for problematic slices through distributed metric computation and hypothesis testing. We develop an efficient strategy that reduces the search space through pruning and prioritization. In the experiments, we show that our search strategy finds most of the anomalous slices by inspecting a small portion of the search space.
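A simplified, single-machine sketch of the search-prune-prioritize loop the abstract describes; the real system distributes metric computation and applies proper hypothesis tests, whereas the statistic below is a crude stand-in.

```python
from itertools import combinations
from statistics import mean

def find_anomalous_slices(rows, metric, features, min_size=50, threshold=2.0):
    """Breadth-first search over 1- and 2-feature slices (sketch)."""
    overall = mean(metric(r) for r in rows)
    results = []
    for dim in (1, 2):
        for feats in combinations(features, dim):
            groups = {}
            for r in rows:                       # group rows by slice key
                groups.setdefault(tuple(r[f] for f in feats), []).append(r)
            for key, members in groups.items():
                if len(members) < min_size:      # pruning: skip tiny slices
                    continue
                m = mean(metric(r) for r in members)
                score = abs(m - overall) * len(members) ** 0.5   # crude statistic
                if score > threshold:            # stand-in for a hypothesis test
                    results.append((dict(zip(feats, key)), m, len(members)))
    # Prioritization: most anomalous slices first.
    return sorted(results, key=lambda t: abs(t[1] - overall), reverse=True)
```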